

Search for: All records

Creators/Authors contains: "Yang, Kezhou"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Despite the promise of superior efficiency and scalability, real‐world deployment of emerging nanoelectronic platforms for brain‐inspired computing has been limited thus far, primarily because of inter‐device variations and intrinsic non‐idealities. In this work, mitigation of these issues is demonstrated by performing learning directly on practical devices through a hardware‐in‐loop approach, utilizing stochastic neurons based on heavy metal/ferromagnetic spin–orbit torque heterostructures. The probabilistic switching and device‐to‐device variability of the fabricated devices of various sizes are characterized to showcase the effect of device dimension on the neuronal dynamics and its consequent impact on network‐level performance. The efficacy of the hardware‐in‐loop scheme is illustrated in a deep learning scenario, achieving performance equivalent to software. This work paves the way for future large‐scale implementations of neuromorphic hardware and the realization of truly autonomous edge‐intelligent devices.

     
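     A rough, hypothetical sketch of the kind of stochastic neuron described above: the switching probability follows a sigmoid of the drive current, with Gaussian device-to-device spread on its parameters, and the sampled (noisy) output is what a hardware-in-loop training scheme would feed back to the learning algorithm. All parameter values here are illustrative assumptions, not the paper's measured devices.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticSOTNeuron:
    """Toy stochastic spin-orbit-torque neuron (illustrative, not the fabricated device)."""

    def __init__(self, i50=100e-6, slope=25e-6, variability=0.05):
        # Device-to-device variability modeled as Gaussian spread on the
        # sigmoid midpoint and slope (both values are assumptions).
        self.i50 = i50 * (1 + variability * rng.standard_normal())
        self.slope = slope * (1 + variability * rng.standard_normal())

    def switching_probability(self, current):
        # Probabilistic switching: sigmoid of the drive current
        return 1.0 / (1.0 + np.exp(-(current - self.i50) / self.slope))

    def fire(self, current):
        # One stochastic switching trial, as sampled in a hardware-in-loop step
        return rng.random() < self.switching_probability(current)

neuron = StochasticSOTNeuron()
drive = 120e-6  # amperes, hypothetical drive current
trials = [neuron.fire(drive) for _ in range(1000)]
print(f"estimated switching probability at 120 uA: {np.mean(trials):.2f}")
```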
  2. Astrocytes play a central role in inducing concerted, phase-synchronized neural-wave patterns inside the brain. In this article, we demonstrate that a radio-frequency signal injected into the underlying heavy-metal layer of spin-orbit torque oscillator neurons mimics the neuron phase synchronization effect realized by glial cells. A potential application of such phase-coupling effects is illustrated in the context of a temporal “binding problem.” We also present the design of a coupled neuron-synapse-astrocyte network, enabled by compact neuromimetic devices, that combines the concepts of local spike-timing-dependent plasticity and astrocyte-induced neural phase synchrony.
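     The phase-coupling effect described above can be illustrated with a simple injection-locking model: oscillator "neurons" with slightly different natural frequencies lock onto a common injected RF drive, which stands in for the astrocyte-like synchronizing influence. This is a generic coupled-oscillator sketch with assumed frequencies and coupling strength, not the device model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt, steps = 8, 1e-3, 20000
omega = 2 * np.pi * (1.0 + 0.02 * rng.standard_normal(n))  # natural frequencies (assumed)
omega_rf = 2 * np.pi * 1.0                                  # injected RF drive frequency
k_inj = 2.0                                                 # injection-locking strength (assumed)
theta = 2 * np.pi * rng.random(n)                           # random initial phases

for t in range(steps):
    drive_phase = omega_rf * t * dt
    # Each oscillator is pulled toward the phase of the common injected signal
    theta += dt * (omega + k_inj * np.sin(drive_phase - theta))

# Kuramoto order parameter: r close to 1 means the neurons are phase synchronized
r = abs(np.mean(np.exp(1j * theta)))
print(f"phase coherence r = {r:.3f}")
```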
  3. Brain-inspired cognitive computing has so far followed two major approaches - one uses multi-layered artificial neural networks (ANNs) to perform pattern-recognition-related tasks, whereas the other uses spiking neural networks (SNNs) to emulate biological neurons in an attempt to be as efficient and fault-tolerant as the brain. While there has been considerable progress in the former area due to a combination of effective training algorithms and acceleration platforms, the latter is still in its infancy due to the lack of both. SNNs have a distinct advantage over their ANN counterparts in that they are capable of operating in an event-driven manner, thus consuming very low power. Several recent efforts have proposed various SNN hardware design alternatives; however, these designs still incur considerable energy overheads. In this context, this paper proposes a comprehensive design spanning the device, circuit, architecture, and algorithm levels to build an ultra-low-power architecture for SNN and ANN inference. For this, we use spintronics-based magnetic tunnel junction (MTJ) devices that have been shown to function as both neuro-synaptic crossbars and thresholding neurons, and that can operate at ultra-low voltage and current levels. Using this MTJ-based neuron model and synaptic connections, we design a low-power chip that has the flexibility to be deployed for inference of SNNs, ANNs, as well as SNN-ANN hybrid networks - a distinct advantage over prior works. We demonstrate the competitive performance and energy efficiency of the SNNs as well as hybrid models on a suite of workloads. Our evaluations show that the proposed design, NEBULA, is up to 7.9× more energy-efficient than a state-of-the-art design, ISAAC, in the ANN mode. In the SNN mode, our design is about 45× more energy-efficient than a contemporary SNN architecture, INXS. A power comparison between NEBULA ANN and SNN modes indicates that the latter is at least 6.25× more power-efficient for the observed benchmarks.
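     As a loose illustration of the MTJ crossbar-plus-thresholding-neuron idea above, the sketch below maps normalized weights onto a conductance range, computes column currents as an analog dot product, and applies a hard threshold (the ANN-mode view; an SNN mode would integrate these currents over time). Conductance levels, voltages, and the threshold are illustrative assumptions, not NEBULA's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical crossbar: weights stored as MTJ conductances between a low
# (anti-parallel) and high (parallel) state; all values below are assumptions.
G_AP, G_P = 5e-6, 20e-6            # siemens
w = rng.random((16, 4))            # normalized synaptic weights in [0, 1]
G = G_AP + w * (G_P - G_AP)        # map weights onto the conductance range

def crossbar_mvm(v_in):
    """Column currents I_j = sum_i V_i * G_ij (ideal crossbar, no parasitics)."""
    return v_in @ G

def mtj_threshold_neuron(i_col, i_th=2e-5):
    # ANN-mode thresholding neuron; an SNN mode would accumulate i_col over time
    return (i_col > i_th).astype(float)

v = 0.2 * rng.random(16)           # volts, hypothetical read voltages
print(mtj_threshold_neuron(crossbar_mvm(v)))
```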